Good morning. It's time for AI.
Last week we mainly talked about rational agents as a framework in which I would like
to place all the algorithms that we're going to look at from now on.
I'll try to tie this back to the agents, but what you should keep in mind is that, for instance, the search algorithms are the deliberative component in such an agent. If the search finds a solution, the agent can then actually execute the actions that this plan suggests.
We have looked at a variety of agents and a variety of environments they can act in. The last thing we looked at, which is where we started on Thursday, was three fundamentally different ways of representing the environment.
The first is the atomic representation, which basically just gives each environment a name. It is a black-box representation: we cannot look into it, we cannot see or reason with the state of the environment. We only know that this is an environment and that this environment comes after that one, essentially.

The next kind is what we call factored, where you have a semi-transparent representation with a couple of slots, and these slots can have values. You might know that you are in an environment in which the weather is foggy, or an environment in which there are 53 students, and so on. You know certain things about the environment, but only through a finite number of attributes in the environment description; these attributes can have values, and those values you can reason about.
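As a minimal sketch of what such a factored state could look like in code (the slot names weather and num_students are just the examples from above, nothing standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LectureState:
    """A factored environment state: a fixed, finite set of slots with values."""
    weather: str          # e.g. "foggy", "sunny"
    num_students: int     # e.g. 53

# Two distinct environments described by the same finite set of attributes.
s1 = LectureState(weather="foggy", num_students=53)
s2 = LectureState(weather="sunny", num_students=53)

# Unlike an atomic name, we can reason about the slots directly.
if s1.weather == "foggy":
    print("turn on the lights")
```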
We will see that this allows a completely different set of algorithms, algorithms that perform better than what we can do with atomic environments. If you think about it, every combination of values in such a factored representation corresponds to exactly one atomic environment, so we have far fewer things to write down, and that's something we'll probably see this week. And if you can look into the environments, you have more guidance about what to do next. We're essentially trading: the atomic representation is wonderfully simple, but we trade that simplicity against the number of environments and the guidance we get. That's essentially where the algorithms will differ.
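To see that correspondence concretely, here is a small sketch; the slot domains below are invented just for illustration:

```python
from itertools import product

# Hypothetical slot domains for a two-slot factored representation.
weather = ["foggy", "sunny", "rainy"]
num_students = range(0, 200)

# Every combination of slot values is exactly one atomic environment,
# so two short domain descriptions stand for 3 * 200 = 600 atomic names.
atomic_states = [f"state_{w}_{n}" for w, n in product(weather, num_students)]
print(len(atomic_states))  # 600
```

An atomic representation would have to list all 600 names explicitly; the factored one describes them with two slots.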
The third kind of environment representation we call structured, where you basically have full flexibility in describing the environment. Human agents use all kinds of representations for communication. You might fill out summary forms about certain things. I might describe my teaching environment the way a course entry does: if you just look at Univis, you have a couple of slots there: when, where, who, for whom, prerequisites, course description, literature, and so on. That's a factored representation. A fully structured representation would be something where I tell somebody a story: I have this AI course, and can you imagine, last week I only had students who sat in the first three rows. There is never going to be a slot in Univis for the number of rows that contain students today; I would need a fairly general mechanism to actually write that down. But it might be important for certain things. I think I had the example of the truck blocking the road because there's a loose cow in a driveway, or something like this. Those are not things you put into forms where you fill out values.
Okay, so, at the end of rational agents, we're looking at a certain class of algorithms that can work with atomic state representations. The main thing we looked at was the prerequisites for running these algorithms. The prerequisite is that we can describe the world, or the environment, atomically, which means we can write down a couple of states, states we do not have to look into, and a set of actions, or a relation, the successor relation; mathematically, these are the same thing. The whole setup will be that if you can do this, if you can formulate your problem using atomic states and actions and the little twiddly bits we looked at, then you can run the search algorithms, which we're going to look at in a little more detail.
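Here is a minimal sketch of such a formulation; the states, actions, and successor relation are invented for illustration, and breadth-first search stands in for the simple search algorithms we'll go through:

```python
from collections import deque

# Atomic states: opaque names we cannot look into.
# Successor relation: which state can follow which, via which action.
successors = {
    "A": [("go-left", "B"), ("go-right", "C")],
    "B": [("go-right", "D")],
    "C": [("go-left", "D")],
    "D": [],
}

def breadth_first_search(start, goal):
    """Return a list of actions leading from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors[state]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

print(breadth_first_search("A", "D"))  # ['go-left', 'go-right']
```

Note that the algorithm never inspects the inside of a state; it only follows the successor relation, which is exactly the atomic setting.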
Really, the search algorithms are relatively simple, and you've seen them before; I will just briefly go through them assuming that you have. The real step in this is that somebody, usually a human agent designer, actually sits down, looks at the problem, and encodes it in states and actions, or a relation. Then you take the algorithm off the shelf, bang hard on it, and if you're lucky, something good comes out. An example we'll be looking at